3 research outputs found
NeU-NBV: Next Best View Planning Using Uncertainty Estimation in Image-Based Neural Rendering
Autonomous robotic tasks require actively perceiving the environment to
achieve application-specific goals. In this paper, we address the problem of
positioning an RGB camera to collect the most informative images to represent
an unknown scene, given a limited measurement budget. We propose a novel
mapless planning framework to iteratively plan the next best camera view based
on collected image measurements. A key aspect of our approach is a new
technique for uncertainty estimation in image-based neural rendering, which
guides measurement acquisition at the most uncertain view among view
candidates, thus maximising the information value during data collection. By
incrementally adding new measurements into our image collection, our approach
efficiently explores an unknown scene in a mapless manner. We show that our
uncertainty estimation is generalisable and valuable for view planning in
unknown scenes. Our planning experiments using synthetic and real-world data
verify that our uncertainty-guided approach finds informative images leading to
more accurate scene representations when compared against baselines.Comment: Accepted to IEEE/RSJ International Conference on Robotics and
Intelligent Systems (IROS) 202
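The iterative planning loop described in the abstract — score candidate views by predicted uncertainty, acquire the most uncertain one, and add it to the image collection — can be sketched as follows. This is a minimal illustration, not the paper's method: the learned rendering-uncertainty network is replaced by a hypothetical `predict_uncertainty` stand-in that scores a 1-D view angle by its distance to the nearest collected view, and all function names are assumptions.

```python
def predict_uncertainty(view, collected_views):
    # Hypothetical stand-in for the paper's learned uncertainty estimate:
    # here a view is "uncertain" if it is far from every collected view,
    # so unexplored regions of the view space score higher.
    return min(abs(view - v) for v in collected_views)

def plan_next_best_view(candidates, collected_views):
    # Pick the candidate view with the highest predicted uncertainty,
    # i.e. the view expected to add the most information.
    return max(candidates, key=lambda v: predict_uncertainty(v, collected_views))

def collect(candidates, budget, initial_view=0.0):
    # Mapless planning loop: iteratively acquire the most uncertain view
    # and add it to the image collection until the budget is spent.
    collected = [initial_view]
    for _ in range(budget):
        collected.append(plan_next_best_view(candidates, collected))
    return collected
```

With candidate view angles `[10.0, 20.0, 90.0]` and a budget of three, the loop first jumps to the most isolated view (90) and then fills the remaining gaps, mirroring the uncertainty-guided exploration behaviour the abstract describes.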
An Informative Path Planning Framework for Active Learning in UAV-based Semantic Mapping
Unmanned aerial vehicles (UAVs) are frequently used for aerial mapping and
general monitoring tasks. Recent progress in deep learning enabled automated
semantic segmentation of imagery to facilitate the interpretation of
large-scale complex environments. Commonly used supervised deep learning for
segmentation relies on large amounts of pixel-wise labelled data, which is
tedious and costly to annotate. The domain-specific visual appearance of aerial
environments often prevents the usage of models pre-trained on publicly
available datasets. To address this, we propose a novel general planning
framework for UAVs to autonomously acquire informative training images for
model re-training. We leverage multiple acquisition functions and fuse them
into probabilistic terrain maps. Our framework combines the mapped acquisition
function information into the UAV's planning objectives. In this way, the UAV
adaptively acquires informative aerial images to be manually labelled for model
re-training. Experimental results on real-world data and in a photorealistic
simulation show that our framework maximises model performance and drastically
reduces labelling efforts. Our map-based planners outperform state-of-the-art
local planners.
Comment: 18 pages, 24 figures
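The core idea of fusing multiple acquisition functions into a probabilistic terrain map, then steering the UAV towards high-value cells, can be sketched in a few lines. This is a simplified illustration under assumed interfaces (the real framework plans full paths, not single waypoints): `fuse_acquisition_maps` and `plan_waypoint` are hypothetical names, and fusion is reduced to a weighted average over per-cell scores.

```python
def fuse_acquisition_maps(maps, weights=None):
    # Fuse several per-cell acquisition scores (e.g. model uncertainty,
    # sample novelty) into one terrain map via a weighted average.
    # `maps` is a list of equal-length lists of cell scores in [0, 1].
    if weights is None:
        weights = [1.0 / len(maps)] * len(maps)
    return [sum(w * c for w, c in zip(weights, cells))
            for cells in zip(*maps)]

def plan_waypoint(fused_map, positions):
    # Greedy planning objective: head for the cell whose fused
    # acquisition value is highest; its image is then labelled
    # and used for model re-training.
    best = max(range(len(fused_map)), key=lambda i: fused_map[i])
    return positions[best]
```

For example, equally weighting an uncertainty map `[0.1, 0.9, 0.2]` and a novelty map `[0.3, 0.5, 0.8]` yields fused scores `[0.2, 0.7, 0.5]`, so the planner targets the middle cell.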
Graph-based View Motion Planning for Fruit Detection
Crop monitoring is crucial for maximizing agricultural productivity and
efficiency. However, monitoring large and complex structures such as sweet
pepper plants presents significant challenges, especially due to frequent
occlusions of the fruits. Traditional next-best view planning can lead to
unstructured and inefficient coverage of the crops. To address this, we propose
a novel view motion planner that builds a graph network of viable view poses
and trajectories between nearby poses, thereby considering robot motion
constraints. The planner searches the graphs for view sequences with the
highest accumulated information gain, allowing for efficient pepper plant
monitoring while minimizing occlusions. The generated view poses aim both at
sufficiently covering already detected fruits and at discovering new ones. The graph
and the corresponding best view pose sequence are computed with a limited
horizon and are adaptively updated in fixed time intervals as the system
gathers new information. We demonstrate the effectiveness of our approach
through simulated and real-world experiments using a robotic arm equipped with
an RGB-D camera and mounted on a trolley. As the experimental results show, our
planner produces view pose sequences to systematically cover the crops and
leads to increased fruit coverage when given a limited time in comparison to a
state-of-the-art single next-best view planner.
Comment: 7 pages, 10 figures, accepted at IROS 202
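The graph search described in the abstract — finding the view-pose sequence with the highest accumulated information gain within a limited horizon — can be sketched as a depth-limited search over a pose graph. This is an assumed simplification, not the paper's planner: the graph of viable poses and trajectories is given as a plain adjacency dict, per-pose information gain is a precomputed lookup, and motion costs are omitted.

```python
def best_view_sequence(graph, gains, start, horizon):
    # Depth-limited search over the view-pose graph for the pose
    # sequence with the highest accumulated information gain.
    # `graph` maps each pose to its reachable neighbours (respecting
    # robot motion constraints); `gains` holds per-pose information gain.
    best_seq, best_gain = [start], gains[start]

    def dfs(node, seq, gain, depth):
        nonlocal best_seq, best_gain
        if gain > best_gain:
            best_seq, best_gain = list(seq), gain
        if depth == 0:
            return
        for nxt in graph[node]:
            if nxt not in seq:  # do not revisit a pose within one sequence
                seq.append(nxt)
                dfs(nxt, seq, gain + gains[nxt], depth - 1)
                seq.pop()

    dfs(start, [start], gains[start], horizon)
    return best_seq, best_gain
```

In the full system this search would be re-run at fixed intervals with updated gains as new fruits are detected, matching the adaptive, limited-horizon replanning the abstract describes.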